    DeepFactors: Real-time probabilistic dense monocular SLAM

    The ability to estimate rich geometry and camera motion from monocular imagery is fundamental to future interactive robotics and augmented reality applications. Different approaches have been proposed that vary in scene geometry representation (sparse landmarks, dense maps), the consistency metric used for optimising the multi-view problem, and the use of learned priors. We present a SLAM system that unifies these methods in a probabilistic framework while still maintaining real-time performance. This is achieved through the use of a learned compact depth map representation and the reformulation of three different types of errors (photometric, reprojection and geometric) so that they can be used within standard factor graph software. We evaluate our system on trajectory estimation and depth reconstruction on real-world sequences and present various examples of estimated dense geometry.
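
    As a rough illustration of this factor-graph formulation (a minimal sketch, not the authors' implementation; the linear residual blocks below are toy stand-ins for the real photometric, reprojection and geometric error models), the three error types can be stacked into one nonlinear least-squares objective over a state holding the camera pose and compact depth code:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        # Toy linear "error models"; the real system would evaluate image
        # warps, feature reprojections and depth consistency here.
        A_pho, A_rep, A_geo = (rng.standard_normal((m, 8)) for m in (20, 10, 15))
        x_true = rng.standard_normal(8)  # stands in for pose + compact depth code

        def residuals(x):
            # Each block plays the role of one factor type; the weights trade
            # the error sources off against each other.
            r_pho = 1.0 * (A_pho @ (x - x_true))
            r_rep = 2.0 * (A_rep @ (x - x_true))
            r_geo = 0.5 * (A_geo @ (x - x_true))
            return np.concatenate([r_pho, r_rep, r_geo])

        # Factor graph solvers minimise the squared sum of all stacked residuals.
        sol = least_squares(residuals, x0=np.zeros(8))
        print(np.allclose(sol.x, x_true, atol=1e-4))  # True: state recovered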

    In-place scene labelling and understanding with implicit scene representation

    Semantic labelling is highly correlated with geometry and radiance reconstruction, as scene entities with similar shape and appearance are more likely to come from similar classes. Recent implicit neural reconstruction techniques are appealing as they do not require prior training data, but the same fully self-supervised approach is not possible for semantics because labels are human-defined properties. We extend neural radiance fields (NeRF) to jointly encode semantics with appearance and geometry, so that complete and accurate 2D semantic labels can be achieved using a small amount of in-place annotations specific to the scene. The intrinsic multi-view consistency and smoothness of NeRF benefit semantics by enabling sparse labels to propagate efficiently. We show the benefit of this approach when labels are either sparse or very noisy in room-scale scenes. We demonstrate its advantageous properties in various interesting applications such as an efficient scene labelling tool, novel semantic view synthesis, label denoising, super-resolution, label interpolation and multi-view semantic label fusion in visual semantic mapping systems.
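
    A minimal PyTorch sketch of this kind of extension (an assumed architecture, not the paper's exact network; the layer sizes and the encoding dimensions pos_dim/dir_dim are illustrative): a NeRF-style MLP gains a view-independent semantic head alongside density and colour, so rendered class logits can be supervised with sparse 2D labels:

        import torch
        import torch.nn as nn

        class SemanticNeRF(nn.Module):
            def __init__(self, pos_dim=63, dir_dim=27, hidden=256, num_classes=13):
                super().__init__()
                self.trunk = nn.Sequential(
                    nn.Linear(pos_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU())
                self.sigma = nn.Linear(hidden, 1)               # density
                self.rgb = nn.Sequential(                       # view-dependent colour
                    nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
                    nn.Linear(hidden // 2, 3), nn.Sigmoid())
                self.semantics = nn.Linear(hidden, num_classes) # view-independent logits

            def forward(self, x_enc, d_enc):
                h = self.trunk(x_enc)
                return (self.sigma(h),
                        self.rgb(torch.cat([h, d_enc], dim=-1)),
                        self.semantics(h))

        model = SemanticNeRF()
        sigma, rgb, sem = model(torch.randn(4, 63), torch.randn(4, 27))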

    Learning meshes for dense visual SLAM

    Estimating motion and surrounding geometry of a moving camera remains a challenging inference problem. From an information theoretic point of view, estimates should get better as more information is included, such as is done in dense SLAM, but this is strongly dependent on the validity of the underlying models. In the present paper, we use triangular meshes as a both compact and dense geometry representation. To allow for simple and fast usage, we propose a view-based formulation for which we predict the in-plane vertex coordinates directly from images and then employ the remaining vertex depth components as free variables. Flexible and continuous integration of information is achieved through the use of a residual based inference technique. This so-called factor graph encodes all information as a mapping from free variables to residuals, the squared sum of which is minimised during inference. We propose the use of different types of learnable residuals, which are trained end-to-end to increase their suitability as information bearing models and to enable accurate and reliable estimation. Detailed evaluation of all components is provided on both synthetic and real data, which confirms the practicability of the presented approach.
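
    To make the residual-based inference concrete (a toy sketch under assumed forms: the real system optimises mesh vertex depths against learned residuals, whereas a simple data-plus-smoothness residual stands in here), a Gauss-Newton loop minimising the squared sum of residuals over free vertex depths looks like this:

        import numpy as np

        def residuals(z, z_meas):
            data = z - z_meas           # data term on the free vertex depths
            smooth = 0.3 * np.diff(z)   # pairwise smoothness between neighbours
            return np.concatenate([data, smooth])

        def gauss_newton(z, z_meas, iters=10, eps=1e-6):
            for _ in range(iters):
                r = residuals(z, z_meas)
                # Numerical Jacobian of the stacked residual vector w.r.t. z.
                J = np.stack([(residuals(z + eps * e, z_meas) - r) / eps
                              for e in np.eye(len(z))], axis=1)
                z = z - np.linalg.solve(J.T @ J, J.T @ r)
            return z

        z0 = np.zeros(5)
        print(gauss_newton(z0, z_meas=np.array([1.0, 1.2, 1.1, 0.9, 1.0])))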

    The concurrent decline of soil lead and children’s blood lead in New Orleans

    Lead (Pb) is extremely toxic and a major cause of chronic diseases worldwide. Pb is associated with health disparities, particularly within low-income populations. In biological systems, Pb mimics calcium and, among other effects, interrupts cell signaling. Furthermore, Pb exposure results in epigenetic changes that affect multigenerational gene expression. Exposure to Pb has decreased through primary prevention, including removal of Pb solder from canned food, regulating lead-based paint, and especially eliminating Pb additives in gasoline. While researchers observe a continuous decline in children’s blood lead (BPb), reservoirs of exposure persist in topsoil, which stores the legacy dust from leaded gasoline and other sources. Our surveys of metropolitan New Orleans reveal that median topsoil Pb in communities (n = 274) decreased 44% from 99 mg/kg to 54 mg/kg (P value of 2.09 × 10⁻⁸), with a median depletion rate of ∼2.4 mg·kg⁻¹·y⁻¹ over 15 y. Between the 2000–2005 and 2011–2016 survey periods, children’s BPb declined from 3.6 μg/dL to 1.2 μg/dL, or 64% (P value of 2.02 × 10⁻⁸⁵), a decrease of ∼0.2 μg·dL⁻¹·y⁻¹ during a median of 12 y. Here, we explore the decline of children’s BPb by examining a metabolism-of-cities framework of inputs, transformations, storages, and outputs. Our findings indicate that decreasing Pb in topsoil is an important factor in the continuous decline of children’s BPb. Similar reductions are expected in other major US cities. The most contaminated urban communities, usually inhabited by vulnerable populations, require further reductions of topsoil Pb to fulfill primary prevention for the nation’s children.

    DeepFusion: real-time dense 3D reconstruction for monocular SLAM using single-view depth and gradient predictions

    While the keypoint-based maps created by sparse monocular Simultaneous Localisation and Mapping (SLAM) systems are useful for camera tracking, dense 3D reconstructions may be desired for many robotic tasks. Solutions involving depth cameras are limited in range and to indoor spaces, and dense reconstruction systems based on minimising the photometric error between frames are typically poorly constrained and suffer from scale ambiguity. To address these issues, we propose a 3D reconstruction system that leverages the output of a Convolutional Neural Network (CNN) to produce fully dense depth maps for keyframes that include metric scale. Our system, DeepFusion, is capable of producing real-time dense reconstructions on a GPU. It fuses the output of a semi-dense multiview stereo algorithm with the depth and gradient predictions of a CNN in a probabilistic fashion, using learned uncertainties produced by the network. While the network only needs to be run once per keyframe, we are able to optimise for the depth map with each new frame so as to constantly make use of new geometric constraints. Based on its performance on synthetic and real world datasets, we demonstrate that DeepFusion is capable of performing at least as well as other comparable systems.
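
    The core of such probabilistic fusion can be illustrated with standard Gaussian inverse-variance weighting (a simplification: the actual system also incorporates gradient predictions and learned per-pixel uncertainties over whole depth maps):

        import numpy as np

        def fuse_depths(d_stereo, var_stereo, d_cnn, var_cnn):
            """Fuse two independent Gaussian depth estimates per pixel."""
            w_s, w_c = 1.0 / var_stereo, 1.0 / var_cnn
            d_fused = (w_s * d_stereo + w_c * d_cnn) / (w_s + w_c)
            var_fused = 1.0 / (w_s + w_c)   # fused estimate is more certain
            return d_fused, var_fused

        d, v = fuse_depths(d_stereo=2.0, var_stereo=0.5, d_cnn=2.4, var_cnn=0.1)
        print(d, v)   # pulled towards the more confident CNN prediction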

    BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking

    Estimating human motion from video is an active research area due to its many potential applications. Most state-of-the-art methods predict human shape and posture estimates for individual images and do not leverage the temporal information available in video. Many "in the wild" sequences of human motion are captured by a moving camera, which adds the complication of conflated camera and human motion to the estimation. We therefore present BodySLAM, a monocular SLAM system that jointly estimates the position, shape, and posture of human bodies, as well as the camera trajectory. We also introduce a novel human motion model to constrain sequential body postures and to observe the scale of the scene. Through a series of experiments on video sequences of human motion captured by a moving monocular camera, we demonstrate that BodySLAM improves estimates of all human body parameters and camera poses when compared to estimating these separately.
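
    As a hypothetical illustration of how a learned motion model can act as a sequential constraint (predict_next below is a made-up stand-in for the paper's model, and the whole cost is a toy), consecutive posture estimates are penalised for deviating from the model's one-step prediction alongside per-frame evidence:

        import numpy as np

        def predict_next(theta):
            return 0.9 * theta   # hypothetical learned one-step dynamics

        def joint_cost(thetas, observations, w_motion=1.0, w_data=1.0):
            """Joint cost over a sequence of posture vectors of shape (T, D)."""
            data = w_data * np.sum((thetas - observations) ** 2)
            motion = w_motion * np.sum(
                (thetas[1:] - predict_next(thetas[:-1])) ** 2)
            return data + motion

        thetas = np.zeros((5, 3))
        obs = np.ones((5, 3))
        print(joint_cost(thetas, obs))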

    Dense RGB-D-Inertial SLAM with Map Deformations


    Towards the probabilistic fusion of learned priors into standard pipelines for 3D reconstruction

    The best way to combine the results of deep learning with standard 3D reconstruction pipelines remains an open problem. While systems that pass the output of traditional multi-view stereo approaches to a network for regularisation or refinement currently seem to get the best results, it may be preferable to treat deep neural networks as separate components whose results can be probabilistically fused into geometry-based systems. Unfortunately, the error models required to do this type of fusion are not well understood, with many different approaches being put forward. Recently, a few systems have achieved good results by having their networks predict probability distributions rather than single values. We propose using this approach to fuse a learned single-view depth prior into a standard 3D reconstruction system. Our system is capable of incrementally producing dense depth maps for a set of keyframes. We train a deep neural network to predict discrete, nonparametric probability distributions for the depth of each pixel from a single image. We then fuse this "probability volume" with another probability volume based on the photometric consistency between subsequent frames and the keyframe image. We argue that combining the probability volumes from these two sources will result in a volume that is better conditioned. To extract depth maps from the volume, we minimise a cost function that includes a regularisation term based on network predicted surface normals and occlusion boundaries. Through a series of experiments, we demonstrate that each of these components improves the overall performance of the system.
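
    Assuming the two per-pixel depth distributions are treated as independent evidence (a plausible reading of the fusion step, not the paper's stated formula), the natural combination is a per-bin product, i.e. a sum in log space followed by renormalisation:

        import numpy as np

        def fuse_volumes(p_net, p_photo, eps=1e-12):
            """p_net, p_photo: (H, W, D) distributions over D depth bins per pixel."""
            log_p = np.log(p_net + eps) + np.log(p_photo + eps)
            log_p -= log_p.max(axis=-1, keepdims=True)   # numerical stability
            p = np.exp(log_p)
            return p / p.sum(axis=-1, keepdims=True)

        H, W, D = 2, 2, 8
        rng = np.random.default_rng(1)
        a = rng.random((H, W, D)); a /= a.sum(-1, keepdims=True)
        b = rng.random((H, W, D)); b /= b.sum(-1, keepdims=True)
        fused = fuse_volumes(a, b)
        print(fused.sum(-1))   # each pixel's fused distribution sums to 1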

    SIMstack: a generative shape and instance model for unordered object stacks

    By estimating 3D shape and instances from a single view, we can capture information about an environment quickly, without the need for comprehensive scanning and multi-view fusion. Solving this task for composite scenes (such as object stacks) is challenging: occluded areas are ambiguous not only in shape but also in instance segmentation, and multiple decompositions could be valid. We observe that physics constrains decomposition as well as shape in occluded regions, and hypothesise that a latent space learned from scenes built under physics simulation can serve as a prior to better predict shape and instances in occluded regions. To this end we propose SIMstack, a depth-conditioned Variational Auto-Encoder (VAE), trained on a dataset of objects stacked under physics simulation. We formulate instance segmentation as a centre voting task, which allows for class-agnostic detection and does not require setting the maximum number of objects in the scene. At test time, our model can generate 3D shape and instance segmentation from a single depth view, probabilistically sampling proposals for the occluded region from the learned latent space. Our method has practical applications in providing robots with some of the ability humans have to make rapid, intuitive inferences about partially observed scenes. We demonstrate an application for precise (non-disruptive) grasping of unknown objects from a single depth view.
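
    A toy sketch of centre voting for class-agnostic instances (assumed mechanics: the network predicts per-point offsets to instance centres; here ground-truth offsets stand in for predictions, and the greedy clustering is purely illustrative):

        import numpy as np

        def cluster_votes(points, offsets, radius=0.2):
            """Greedy clustering of voted centres; returns an instance id per point."""
            centres = points + offsets   # every point votes for its instance centre
            ids = -np.ones(len(points), dtype=int)
            next_id = 0
            for i in range(len(points)):
                if ids[i] >= 0:
                    continue
                near = (np.linalg.norm(centres - centres[i], axis=1) < radius) \
                       & (ids < 0)
                ids[near] = next_id      # unassigned votes near this centre agree
                next_id += 1             # no fixed maximum number of instances
            return ids

        pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
        offs = np.array([[0.05, 0.0], [-0.05, 0.0], [0.05, -0.05], [-0.05, 0.05]])
        print(cluster_votes(pts, offs))  # two instances: [0 0 1 1]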